Checklist 1. For all authors (a)

Neural Information Processing Systems

Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Did you describe the limitations of your work?
Did you discuss any potential negative societal impacts of your work?
If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
Did you mention the license of the assets?
Did you include any new assets either in the supplemental material or as a URL? [Yes]
Did you discuss whether and how consent was obtained from people whose data you're using/curating?
If you used crowdsourcing or conducted research with human subjects...
(a) This early warning score (EWS) dataset is built on real patient data recorded in hospitals.


PulseFi: A Low Cost Robust Machine Learning System for Accurate Cardiopulmonary and Apnea Monitoring Using Channel State Information

Kocheta, Pranay, Bhatia, Nayan Sanjay, Obraczka, Katia

arXiv.org Artificial Intelligence

Abstract--Non-intrusive monitoring of vital signs has become increasingly important in a variety of healthcare settings. In this paper, we present PulseFi, a novel low-cost non-intrusive system that uses Wi-Fi sensing and artificial intelligence to accurately and continuously monitor heart rate and breathing rate, as well as detect apnea events. PulseFi operates using low-cost commodity devices, making it more accessible and cost-effective. It uses a signal processing pipeline to process Wi-Fi telemetry data, specifically Channel State Information (CSI), that is fed into a custom low-compute Long Short-Term Memory (LSTM) neural network model. We evaluate PulseFi using two datasets: one that we collected locally using ESP32 devices and another that contains recordings of 118 participants collected using the Raspberry Pi 4B, making the latter the most comprehensive dataset of its kind. Our results show that PulseFi can effectively estimate heart rate and breathing rate in a seamless, non-intrusive way with comparable or better accuracy than multiple-antenna systems that can be expensive and less accessible. Non-intrusive monitoring of vital signs (such as heart rate, breathing rate, and sleep apnea) has become increasingly important, particularly for home care, elderly care, and managing chronic conditions. As the global population ages and chronic disease rates increase, there is a growing need for continuous and accurate vital sign monitoring systems that can be easily deployed across the healthcare continuum, including hospitals, long-term care, and home care settings [1]. Breathing and heart rate provide critical information about an individual's respiratory and cardiovascular health. Furthermore, detection of apnea, characterized by temporary pauses in breathing (typically lasting 10 seconds or longer) [2], is critical, as conditions like sleep apnea affect millions worldwide and can lead to serious health complications if undiagnosed [3].
Thus, non-invasive monitoring of these cardiopulmonary variables is necessary. Traditional approaches for vital sign monitoring have relied heavily on contact-based sensors such as pulse oximeters, heart rate belts, chest straps, or highly specialized medical equipment, such as polysomnography (PSG) or electrocardiogram (ECG) devices.
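The core idea of recovering a periodic vital sign from CSI amplitude can be illustrated with a simple spectral baseline. The sketch below is not PulseFi's LSTM pipeline; it is a minimal FFT peak-picking approach on synthetic data, with the sampling rate, frequency band, and signal mix chosen purely for illustration.

```python
import numpy as np

def estimate_heart_rate(csi_amplitude, fs, band=(0.8, 2.0)):
    """Estimate heart rate (bpm) from a CSI amplitude trace by picking the
    dominant spectral peak inside a plausible cardiac band (0.8-2.0 Hz)."""
    x = csi_amplitude - np.mean(csi_amplitude)      # remove DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    mask = (freqs >= band[0]) & (freqs <= band[1])  # ignore breathing band and noise
    peak_freq = freqs[mask][np.argmax(spectrum[mask])]
    return peak_freq * 60.0                         # Hz -> beats per minute

# Synthetic 30 s trace at 50 Hz: breathing at 0.25 Hz, heartbeat at 1.2 Hz, noise
np.random.seed(0)
fs = 50.0
t = np.arange(0, 30, 1.0 / fs)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t) \
        + 0.05 * np.random.randn(len(t))
bpm = estimate_heart_rate(trace, fs)                # ~72 bpm
```

Band-limiting matters here because chest motion from breathing dominates the raw CSI amplitude; restricting the search to the cardiac band lets the much weaker heartbeat component surface as the peak.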


REMONI: An Autonomous System Integrating Wearables and Multimodal Large Language Models for Enhanced Remote Health Monitoring

Ho, Thanh Cong, Kharrat, Farah, Abid, Abderrazek, Karray, Fakhri

arXiv.org Artificial Intelligence

With the widespread adoption of wearable devices in our daily lives, the demand and appeal for remote patient monitoring have significantly increased. Most research in this field has concentrated on collecting sensor data, visualizing it, and analyzing it to detect anomalies in specific diseases such as diabetes, heart disease, and depression. However, this domain has a notable gap in the aspect of human-machine interaction. This paper proposes REMONI, an autonomous REmote health MONItoring system that integrates multimodal large language models (MLLMs), the Internet of Things (IoT), and wearable devices. The system automatically and continuously collects vital signs and accelerometer data from a special wearable (such as a smartwatch), along with visual data in the form of patient video clips captured by cameras. This data is processed by an anomaly detection module, which includes a fall detection model and algorithms to identify and alert caregivers to the patient's emergency conditions. A distinctive feature of our proposed system is the natural language processing component, developed with MLLMs capable of detecting and recognizing a patient's activity and emotion while responding to healthcare workers' inquiries. Additionally, prompt engineering is employed to integrate all patient information seamlessly. As a result, doctors and nurses can access real-time vital signs and the patient's current state and mood by interacting with an intelligent agent through a user-friendly web application. Our experiments demonstrate that our system is implementable and scalable for real-life scenarios, potentially reducing the workload of medical professionals and healthcare costs. A full-fledged prototype illustrating the functionalities of the system has been developed and is being tested to demonstrate the robustness of its various capabilities.


Low-cost Embedded Breathing Rate Determination Using 802.15.4z IR-UWB Hardware for Remote Healthcare

Lambrecht, Anton, Luchie, Stijn, Fontaine, Jaron, Van Herbruggen, Ben, Shahid, Adnan, De Poorter, Eli

arXiv.org Artificial Intelligence

Respiratory diseases account for a significant portion of global mortality. Affordable and early detection is an effective way of addressing these ailments. To this end, a low-cost commercial off-the-shelf (COTS), IEEE 802.15.4z standard compliant impulse-radio ultra-wideband (IR-UWB) radar system is used to estimate human respiration rates. We propose a convolutional neural network (CNN) specifically adapted to predict breathing rates from ultra-wideband (UWB) channel impulse response (CIR) data, and compare its performance with both rule-based algorithms and model-based solutions. The study uses a diverse dataset, incorporating various real-life environments to evaluate system robustness. To facilitate future research, this dataset will be released as open source. Results show that the CNN achieves a mean absolute error (MAE) of 1.73 breaths per minute (BPM) in unseen situations, significantly outperforming rule-based methods (3.40 BPM). By incorporating calibration data from other individuals in the unseen situations, the error is further reduced to 0.84 BPM. In addition, this work evaluates the feasibility of running the pipeline on a low-cost embedded device. Applying 8-bit quantization to both the weights and input/output tensors reduces memory requirements by 67% and inference time by 64% with only a 3% increase in MAE. As a result, we show it is feasible to deploy the algorithm on an nRF52840 system-on-chip (SoC) requiring only 46 KB of memory and operating with an inference time of only 192 ms. Once deployed, an analytical energy model estimates that the system, while continuously monitoring the room, can operate for up to 268 days without recharging when powered by a 20 000 mAh battery pack. For breathing monitoring in bed, the sampling rate can be lowered, extending battery life to 313 days, making the solution highly efficient for real-world, low-cost deployments.
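The quantization step can be sketched independently of the CNN. The snippet below shows symmetric per-tensor int8 quantization (an assumption; the abstract does not specify the exact scheme): float32 to int8 cuts weight storage by 75% in theory, in the same spirit as the 67% overall reduction reported, which presumably also counts buffers kept at higher precision.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q,
    where q is an int8 tensor in [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # stand-in weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

memory_saving = 1 - q.nbytes / w.nbytes   # 0.75: 4 bytes -> 1 byte per weight
max_err = np.max(np.abs(w - w_hat))       # rounding error, bounded by scale / 2
```

On a microcontroller like the nRF52840 the int8 tensors are used directly by the integer inference kernels; the dequantize step here only illustrates the approximation error introduced.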


KIRETT: Smart Integration of Vital Signs Data for Intelligent Decision Support in Rescue Scenarios

Nadeem, Mubaris, Zenkert, Johannes, Weber, Christian, Bender, Lisa, Fathi, Madjid

arXiv.org Artificial Intelligence

The integration of vital signs in healthcare has witnessed a steady rise, promising to assist health professionals in their daily tasks and improve patient treatment. In life-threatening situations, like rescue operations, crucial decisions need to be made in the shortest possible amount of time to ensure that excellent treatment is provided during life-saving measures. The integration of vital signs in the treatment holds the potential to improve time utilization for rescuers in such critical situations. They furthermore serve to support health professionals during the treatment with useful information and suggestions. To achieve such a goal, the KIRETT project serves to provide treatment recommendations and situation detection, combined on a wrist-worn wearable for rescue operations. This paper aims to present the significant role of vital signs in the improvement of decision-making during rescue operations and show their impact on health professionals and patients in need.


Mind the Missing: Variable-Aware Representation Learning for Irregular EHR Time Series using Large Language Models

Kwon, Jeong Eul, Yoon, Joo Heung, Lee, Hyo Kyung

arXiv.org Artificial Intelligence

Irregular sampling and high missingness are intrinsic challenges in modeling time series derived from electronic health records (EHRs), where clinical variables are measured at uneven intervals depending on workflow and intervention timing. To address this, we propose VITAL -- a variable-aware, large language model (LLM)-based framework tailored for learning from irregularly sampled physiological time series. VITAL differentiates between two distinct types of clinical variables: vital signs, which are frequently recorded and exhibit temporal patterns, and laboratory tests, which are measured sporadically and lack temporal structure. It reprograms vital signs into the language space, enabling the LLM to capture temporal context and reason over missing values through explicit encoding. In contrast, laboratory variables are embedded either using representative summary values or a learnable [Not measured] token, depending on their availability. Extensive evaluations on benchmark datasets from PhysioNet demonstrate that VITAL outperforms state-of-the-art methods designed for irregular time series. Furthermore, it maintains robust performance under high levels of missingness, which is prevalent in real-world clinical scenarios where key variables are often unavailable. Introduction: Electronic Health Records (EHRs) digitally capture a wealth of patient data generated during routine clinical care. In particular, the Intensive Care Unit (ICU) is a data-rich environment due to the need for continuous, high-resolution patient monitoring.
This has led to a surge of research in medical artificial intelligence (AI), with many studies leveraging publicly available EHR datasets in combination with machine learning techniques for tasks such as early warning, outcome prediction, and patient stratification [1, 2, 3, 4, 5, 6, 7, 8, 9]. A common approach in these studies is to model patient records as multivariate time series, capturing the temporal evolution of physiological and clinical variables. However, in practice, EHR time series are often irregularly sampled due to variations in clinical workflows, measurement protocols, and intervention timing.
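The [Not measured] token idea described above can be shown in a few lines. The sketch below is a toy numpy stand-in, not VITAL's LLM-based implementation: the embedding dimension, the linear projection, and the token values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                  # embedding dimension (illustrative)
not_measured = rng.normal(size=d)      # stands in for a learnable [Not measured] token

def embed_lab(value, measured, w, b):
    """Embed a lab variable: project its summary value through a per-variable
    linear map if measured, otherwise fall back to the shared missing token."""
    if measured:
        return value * w + b           # value-dependent embedding
    return not_measured                # explicit 'absent' representation

w, b = rng.normal(size=d), rng.normal(size=d)   # toy per-variable parameters
e_measured = embed_lab(7.4, True, w, b)         # an observed lab value
e_missing = embed_lab(None, False, w, b)        # an unobserved lab value
```

The point of the shared token is that "not measured" becomes a first-class input the downstream model can condition on, rather than an imputed value indistinguishable from a real observation.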


Enhancing Clinical Decision-Making: Integrating Multi-Agent Systems with Ethical AI Governance

Chen, Ying-Jung, Albarqawi, Ahmad, Chen, Chi-Sheng

arXiv.org Artificial Intelligence

Abstract--Recent advances in the data-driven medicine approach, which integrates ethically managed and explainable artificial intelligence into clinical decision support systems (CDSS), are critical to ensure reliable and effective patient care. This paper focuses on comparing novel agent system designs that use modular agents to analyze laboratory results, vital signs, and clinical context, and to predict and validate results. We implement our agent system with the eICU database, running the lab analysis, vitals-only interpreter, and contextual reasoner agents first, then sharing the memory with the integration agent, prediction agent, transparency agent, and a validation agent. Our results suggest that the multi-agent system (MAS) performed better than the single-agent system (SAS), with higher mortality prediction accuracy (59% vs. 56%) and lower mean error for length of stay (LOS) (4.37 vs. 5.82 days). However, the transparency score for the SAS (86.21) is slightly better than that for the MAS (85.5). Finally, this study suggests that our agent-based framework not only improves process transparency and prediction accuracy but also strengthens trustworthy AI-assisted decision support in an intensive care setting. Artificial intelligence (AI) has been widely adopted in healthcare [1]. Within medicine, it is proving valuable for sharpening diagnostic precision, supporting treatment planning, and helping clinicians take care of patients [2].


Towards Trustworthy Vital Sign Forecasting: Leveraging Uncertainty for Prediction Intervals

Wang, Li Rong, Henderson, Thomas C., Ong, Yew Soon, Ng, Yih Yng, Fan, Xiuyi

arXiv.org Artificial Intelligence

Vital signs, such as heart rate and blood pressure, are critical indicators of patient health and are widely used in clinical monitoring and decision-making. While deep learning models have shown promise in forecasting these signals, their deployment in healthcare remains limited in part because clinicians must be able to trust and interpret model outputs. Without reliable uncertainty quantification -- particularly calibrated prediction intervals (PIs) -- it is unclear whether a forecasted abnormality constitutes a meaningful warning or merely reflects model noise, hindering clinical decision-making. To address this, we present two methods for deriving PIs from the Reconstruction Uncertainty Estimate (RUE), an uncertainty measure well-suited to vital-sign forecasting due to its sensitivity to data shifts and support for label-free calibration. Our parametric approach assumes that prediction errors and uncertainty estimates follow a Gaussian copula distribution, enabling closed-form PI computation. Our non-parametric approach, based on k-nearest neighbours (KNN), empirically estimates the conditional error distribution using similar validation instances. We evaluate these methods on two large public datasets with minute- and hour-level sampling, representing high- and low-frequency health signals. Experiments demonstrate that the Gaussian copula method consistently outperforms conformal prediction baselines on low-frequency data, while the KNN approach performs best on high-frequency data. These results underscore the clinical promise of RUE-derived PIs for delivering interpretable, uncertainty-aware vital sign forecasts.
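The KNN approach described above admits a compact sketch: take the k validation points whose uncertainty estimates are closest to the query's, and read the prediction-interval bounds off the empirical quantiles of their forecast errors. The function name, the choice of k, and the synthetic data below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def knn_prediction_interval(u_query, u_val, err_val, k=200, alpha=0.1):
    """Non-parametric (1 - alpha) prediction interval: estimate the conditional
    error distribution from the k validation instances whose uncertainty
    estimates are nearest to the query's, then take empirical quantiles."""
    idx = np.argsort(np.abs(u_val - u_query))[:k]   # k nearest in uncertainty space
    lo = np.quantile(err_val[idx], alpha / 2)
    hi = np.quantile(err_val[idx], 1 - alpha / 2)
    return lo, hi                                   # additive offsets around the forecast

# Synthetic validation set: forecast errors whose spread grows with uncertainty
rng = np.random.default_rng(2)
u_val = rng.uniform(0.05, 1.0, 2000)        # per-instance uncertainty estimates
err_val = rng.normal(0.0, u_val)            # heteroscedastic signed errors
wide = knn_prediction_interval(0.9, u_val, err_val)    # high-uncertainty query
narrow = knn_prediction_interval(0.1, u_val, err_val)  # low-uncertainty query
```

Because the interval is built only from neighbours in uncertainty space, a high-uncertainty query naturally receives a wider interval than a low-uncertainty one, which is the calibration behaviour clinicians need from such forecasts.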

